
    Sparsity Promoting Regularization for Effective Noise Suppression in SPECT Image Reconstruction

    The purpose of this research is to develop an advanced method for low-count, and hence high-noise, Single-Photon Emission Computed Tomography (SPECT) image reconstruction. The method consists of a novel reconstruction model that suppresses noise during reconstruction and an efficient algorithm to solve the model. A novel regularizer is introduced as the nonconvex denoising term, based on the approximate sparsity of the image in a geometric tight frame transform domain. The deblurring term is based on the negative log-likelihood of the SPECT data model. To solve the resulting nonconvex optimization problem, a Preconditioned Fixed-point Proximity Algorithm (PFPA) is introduced. We prove that, under appropriate assumptions, PFPA converges to a local solution of the optimization problem at a global O(1/k) convergence rate. Substantial numerical results for simulation data are presented to demonstrate the superiority of the proposed method in denoising, artifact suppression, and reconstruction accuracy. We simulate noisy 2D SPECT data from two phantoms: hot Gaussian spheres on a random lumpy warm background, and an anthropomorphic brain phantom, at high- and low-noise levels (64k and 90k counts, respectively), and reconstruct them using PFPA. We also perform limited comparative studies with selected competing state-of-the-art total variation (TV) and higher-order TV (HOTV) transform-based methods, and with the widely used post-filtered maximum-likelihood expectation-maximization. We investigate the imaging performance of these methods using Contrast-to-Noise Ratio (CNR), Ensemble Variance Images (EVI), Background Ensemble Noise (BEN), Normalized Mean-Square Error (NMSE), and Channelized Hotelling Observer (CHO) detectability. Each of the competing methods is independently optimized for each metric. We establish that the proposed method outperforms the other approaches in all image quality metrics except NMSE, where it is matched by HOTV. The superiority of the proposed method is especially evident in the CHO detectability test results. We also perform a qualitative image evaluation for the presence and severity of image artifacts, where the proposed method is also better at suppressing staircase artifacts than the TV-based methods. However, edge artifacts in high-contrast regions persist. We conclude that the proposed method may offer a powerful tool for detection tasks in high-noise SPECT imaging.
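
    For orientation, the model described above generically combines a Poisson negative log-likelihood data-fidelity (deblurring) term with a sparsity-promoting penalty applied in a transform domain. The display below is a minimal sketch of that generic penalized-likelihood form; the symbols (system matrix A, measured counts y, additive background term γ, tight-frame analysis operator B, nonconvex sparsity-promoting function φ, weight λ) are placeholder notation, not the authors' exact formulation.

```latex
\[
  \min_{x \ge 0}\;
    \underbrace{\langle Ax + \gamma,\ \mathbf{1}\rangle
      - \langle y,\ \ln(Ax + \gamma)\rangle}_{\text{negative Poisson log-likelihood (deblurring term)}}
    \;+\;
    \underbrace{\lambda\,\varphi(Bx)}_{\text{nonconvex sparsity penalty}}
\]
```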

    A Fast Convergent Ordered-Subsets Algorithm with Subiteration-Dependent Preconditioners for PET Image Reconstruction

    We investigated the imaging performance of a fast convergent ordered-subsets algorithm with subiteration-dependent preconditioners (SDPs) for positron emission tomography (PET) image reconstruction. In particular, we considered the use of SDPs with the block sequential regularized expectation maximization (BSREM) approach and the relative difference prior (RDP) regularizer, owing to its prior clinical adoption by vendors. Because the RDP regularization promotes smoothness in the reconstructed image, the directions of the gradients in smooth areas point more accurately toward the objective function's minimizer than those in variable areas. Motivated by this observation, two SDPs have been designed to increase iteration step sizes in the smooth areas and reduce iteration step sizes in the variable areas relative to a conventional expectation maximization preconditioner. The momentum technique used for convergence acceleration can be viewed as a special case of SDP. We have proved the global convergence of SDP-BSREM algorithms by assuming certain characteristics of the preconditioner. By means of numerical experiments using both simulated and clinical PET data, we have shown that the SDP-BSREM algorithms substantially improve the convergence rate compared to conventional BSREM and a vendor's implementation (Q.Clear). Specifically, the SDP-BSREM algorithms reach the same objective function value 35%-50% faster than the conventional BSREM and commercial Q.Clear algorithms. Moreover, we showed in phantoms with hot, cold, and background regions that the SDP-BSREM algorithms approach the values of a highly converged reference image faster than the conventional BSREM and commercial Q.Clear algorithms.
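
    As a rough illustration of the kind of update an SDP modifies, the sketch below shows a single conventional EM-preconditioned ordered-subsets step for a penalized Poisson likelihood. The abstract does not specify the exact form of the subiteration-dependent preconditioners, so the function name, arguments, and the plain EM preconditioner here are assumptions for illustration only; an SDP would rescale the preconditioner differently in smooth versus variable regions.

```python
import numpy as np

def os_preconditioned_step(x, A_sub, y_sub, grad_penalty, step=1.0, eps=1e-12):
    """One illustrative EM-preconditioned ordered-subsets update (not the paper's SDP)."""
    proj = A_sub @ x + eps                                # forward projection for this subset
    grad_data = A_sub.T @ (1.0 - y_sub / proj)            # gradient of Poisson neg-log-likelihood
    grad = grad_data + grad_penalty(x)                    # add gradient of the penalty (e.g., RDP)
    precond = x / (A_sub.T @ np.ones_like(y_sub) + eps)   # conventional EM-style preconditioner
    x_new = x - step * precond * grad                     # preconditioned gradient step
    return np.maximum(x_new, 0.0)                         # keep the image nonnegative
```

    With `grad_penalty` returning zeros and `step=1.0`, this step reduces to the familiar MLEM/OSEM multiplicative update, which is what makes the EM preconditioner a natural baseline for the SDP variants described above.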

    Quantitative Modeling of Cerenkov Light Production Efficiency from Medical Radionuclides

    There has been recent and growing interest in applying Cerenkov radiation (CR) for biological applications. Knowledge of the production efficiency and other characteristics of the CR produced by various radionuclides would help in assessing the feasibility of proposed applications and guide the choice of radionuclides. To generate this information we developed models of CR production efficiency based on the Frank-Tamm equation and models of CR distribution based on Monte Carlo simulations of photon and β particle transport. All models were validated against direct measurements using multiple radionuclides and then applied to a number of radionuclides commonly used in biomedical applications. We show that two radionuclides, Ac-225 and In-111, which have been reported to produce CR in water, do not in fact produce CR directly. We also propose a simple means of using this information to calibrate high-sensitivity luminescence imaging systems and show evidence suggesting that this calibration may be more accurate than methods currently in routine use.
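
    For reference, the Frank-Tamm relation mentioned above has a standard form for the number of Cerenkov photons emitted per unit path length and per unit wavelength, shown below with α the fine-structure constant, z the particle charge, β = v/c, and n(λ) the refractive index of the medium. The threshold condition βn(λ) > 1 is what the production-efficiency models build on: for electrons in water (n ≈ 1.33) it corresponds to a kinetic-energy threshold of roughly 260 keV, which is why some radionuclides reported to produce CR in water do not in fact do so directly.

```latex
\[
  \frac{d^{2} N}{dx\, d\lambda}
    = \frac{2\pi \alpha z^{2}}{\lambda^{2}}
      \left( 1 - \frac{1}{\beta^{2} n^{2}(\lambda)} \right),
  \qquad \beta\, n(\lambda) > 1 .
\]
```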

    Estimating radiation effective doses from whole body computed tomography scans based on U.S. soldier patient height and weight

    Background: The purpose of this study is to explore how a patient's height and weight can be used to predict the effective dose to a reference phantom of similar height and weight from a chest-abdomen-pelvis computed tomography scan when machine-based parameters are unknown. Since machine-based scanning parameters can be misplaced or lost, a predictive model will enable the medical professional to quantify a patient's cumulative radiation dose. Methods: One hundred mathematical phantoms of varying heights and weights were defined within an x-ray Monte Carlo-based software code in order to calculate organ absorbed doses and effective doses from a chest-abdomen-pelvis scan. Regression analysis was used to develop an effective dose predictive model. The regression model was experimentally verified using anthropomorphic phantoms and validated against a real patient population. Results: Estimates of the effective doses as calculated by the predictive model were within 10% of the estimates of the effective doses using experimentally measured absorbed doses within the anthropomorphic phantoms. Comparisons of the patient population effective doses show that the predictive model is within 33% of current methods of estimating effective dose using machine-based parameters. Conclusions: A patient's height and weight can be used to estimate the effective dose from a chest-abdomen-pelvis computed tomography scan. The presented predictive model can be used interchangeably with current effective dose estimating techniques that rely on computed tomography machine-based parameters.
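
    The abstract does not state the functional form of the regression, so the sketch below only illustrates the general workflow with an assumed linear model in height and weight; the function names and the linear form are hypothetical, and the coefficients would be fit to the Monte Carlo phantom results described above.

```python
import numpy as np

def fit_dose_model(height_cm, weight_kg, effective_dose_mSv):
    """Fit an illustrative linear model E ~ b0 + b1*height + b2*weight (assumed form).

    Inputs are 1D numpy arrays; the dose values would come from the Monte Carlo
    phantom calculations, not from measurements fabricated here.
    """
    X = np.column_stack([np.ones_like(height_cm), height_cm, weight_kg])  # design matrix
    coef, *_ = np.linalg.lstsq(X, effective_dose_mSv, rcond=None)         # least-squares fit
    return coef  # (b0, b1, b2)

def predict_dose(coef, height_cm, weight_kg):
    """Predict effective dose for a patient of given height and weight."""
    b0, b1, b2 = coef
    return b0 + b1 * height_cm + b2 * weight_kg
```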

    Toward a standard for the evaluation of PET-Auto-Segmentation methods following the recommendations of AAPM task group No. 211: Requirements and implementation

    Purpose: The aim of this paper is to define the requirements and describe the design and implementation of a standard benchmark tool for the evaluation and validation of PET auto-segmentation (PET-AS) algorithms. This work follows the recommendations of Task Group 211 (TG211) appointed by the American Association of Physicists in Medicine (AAPM). Methods: The recommendations published in the AAPM TG211 report were used to derive a set of required features and to guide the design and structure of a benchmarking software tool. These items included the selection of appropriate representative data and reference contours obtained from established approaches and the description of available metrics. The benchmark was designed to be extensible through the inclusion of bespoke segmentation methods, while maintaining its main purpose of being a standard testing platform for newly developed PET-AS methods. An example implementation of the proposed framework, named PETASset, was built. In this work, a selection of PET-AS methods representing common approaches to PET image segmentation was evaluated within PETASset for the purpose of testing and demonstrating the capabilities of the software as a benchmark platform. Results: A selection of clinical, physical, and simulated phantom data, including "best estimate" reference contours from macroscopic specimens, simulation templates, and CT scans, was built into the PETASset application database. Specific metrics such as the Dice Similarity Coefficient (DSC), Positive Predictive Value (PPV), and Sensitivity (S) were included to allow the user to compare the results of any given PET-AS algorithm to the reference contours. In addition, a tool to generate structured reports on the evaluation of the performance of PET-AS algorithms against the reference contours was built. The metric agreement values with the reference contours across the PET-AS methods evaluated for demonstration ranged between 0.51 and 0.83, 0.44 and 0.86, and 0.61 and 1.00 for the DSC, PPV, and S metrics, respectively. Examples of agreement limits were provided to show how the software could be used to evaluate a new algorithm against the existing state of the art. Conclusions: PETASset provides a platform for standardizing the evaluation and comparison of different PET-AS methods on a wide range of PET datasets. The developed platform will be available to users willing to evaluate their PET-AS methods and to contribute additional evaluation datasets.
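
    The three agreement metrics named above have standard definitions on binary masks; the sketch below computes them in that standard way and is not PETASset's own implementation.

```python
import numpy as np

def segmentation_metrics(auto_mask, ref_mask):
    """Compute DSC, PPV, and sensitivity between a PET-AS contour and a reference contour.

    Both inputs are boolean (or 0/1) arrays of the same shape.
    """
    auto = np.asarray(auto_mask, dtype=bool)
    ref = np.asarray(ref_mask, dtype=bool)
    tp = np.count_nonzero(auto & ref)    # voxels segmented and in the reference
    fp = np.count_nonzero(auto & ~ref)   # voxels segmented but not in the reference
    fn = np.count_nonzero(~auto & ref)   # reference voxels missed by the segmentation
    dsc = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0   # Dice Similarity Coefficient
    ppv = tp / (tp + fp) if (tp + fp) else 0.0                          # Positive Predictive Value
    sens = tp / (tp + fn) if (tp + fn) else 0.0                         # Sensitivity
    return dsc, ppv, sens
```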

    Real-time data-driven motion correction in PET

    PET imaging has been, and continues to be, an evolving diagnostic technology. In recent years, the modernizing digital landscape has opened new opportunities for data-driven innovation. One such facet has been data-driven motion correction (DDMC) in PET. As both research and industry propel this technology forward, we can recognize prospects and opportunities for further development. Clinical practicality is a defining strength of DDMC approaches; it is what sets them apart from traditional hardware-driven motion correction strategies, which have largely not gained acceptance in routine diagnostic PET, and the ease of use of DDMC may help propel acceptance of motion correction solutions in clinical practice. As we reflect on the present state of the field, we should consider that DDMC can be made even more practical, and likely more impactful, if developed further to fit within a real-time acquisition framework. This vision for development is not new, but it has been made more feasible by contemporary electronics and has begun to be revisited in the contemporary literature. The opportunities for development lie at a new forefront of innovation where medical physics integrates with engineering, data science, and modern computing capacities. Real-time DDMC is a systems integration challenge, and achieving it will require cooperation between hardware and software developers, and likely between academia and industry. While challenges for development do exist, it is likely that we will see real-time DDMC come to fruition in the coming years. This effort may establish the groundwork for developing similar innovations in the emerging age of digital innovation.